
    Least squares residuals and minimal residual methods

    We study Krylov subspace methods for solving unsymmetric linear algebraic systems that minimize the norm of the residual at each step (minimal residual (MR) methods). MR methods are often formulated in terms of a sequence of least squares (LS) problems of increasing dimension. We present several basic identities and bounds for the LS residual. These results are interesting in the general context of solving LS problems. When applied to MR methods, they show that the size of the MR residual is strongly related to the conditioning of different bases of the same Krylov subspace. Using different bases is useful in theory because relating convergence to the characteristics of different bases offers new insight into the behavior of MR methods. Different bases also lead to different implementations which are mathematically equivalent but can differ numerically. Our theoretical results are used for a finite precision analysis of implementations of the GMRES method [Y. Saad and M. H. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856--869]. We explain that the choice of the basis is fundamental for the numerical stability of the implementation. As demonstrated in the case of Simpler GMRES [H. F. Walker and L. Zhou, Numer. Linear Algebra Appl., 1 (1994), pp. 571--581], even the best orthogonalization technique used for computing the basis does not compensate for the loss of accuracy due to an inappropriate choice of the basis. In particular, we prove that Simpler GMRES is inherently less numerically stable than the classical GMRES implementation of Saad and Schultz [SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856--869].
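    To make the MR/LS connection concrete, the following NumPy sketch runs a few Arnoldi steps and obtains each GMRES iterate from a small least squares problem with the Hessenberg matrix. The matrix A, right-hand side b, and step count are illustrative placeholders chosen here, not data or code from the paper.

```python
import numpy as np

def gmres_ls(A, b, k):
    """k steps of GMRES in its Arnoldi / least squares formulation (illustrative sketch)."""
    n = b.size
    beta = np.linalg.norm(b)
    Q = np.zeros((n, k + 1))          # orthonormal Krylov basis (Arnoldi vectors)
    H = np.zeros((k + 1, k))          # upper Hessenberg matrix: A Q_k = Q_{k+1} H_k
    Q[:, 0] = b / beta
    x, res_norms = np.zeros(n), [beta]
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):        # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:       # (lucky) breakdown: Krylov space is invariant
            break
        Q[:, j + 1] = w / H[j + 1, j]
        # the minimal residual step reduces to a small LS problem with H
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)[0]
        x = Q[:, :j + 1] @ y
        res_norms.append(np.linalg.norm(b - A @ x))
    return x, res_norms

rng = np.random.default_rng(0)
A = np.eye(50) + 0.1 * rng.standard_normal((50, 50))
b = rng.standard_normal(50)
x, res_norms = gmres_ls(A, b, 25)
print(res_norms[0], res_norms[-1])    # residual norms are non-increasing
```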

    Residual Smoothing Techniques: Do They Improve The Limiting Accuracy Of Iterative Solvers?

    Many iterative methods for solving linear systems, in particular the biconjugate gradient (BiCG) method and its "squared" version CGS (or BiCGS), often produce residuals whose norms decrease far from monotonically and instead fluctuate rather strongly. Large intermediate residuals are known to reduce the ultimately attainable accuracy of the method, unless special measures are taken to counteract this effect. One measure that has been suggested is residual smoothing: by application of simple recurrences, the iterates xn and the corresponding residuals rn := b − Axn are replaced by smoothed iterates yn and corresponding residuals sn := b − Ayn. We address the question whether the smoothed residuals can ultimately become markedly smaller than the primary ones. To investigate this, we present a roundoff error analysis of the smoothing algorithms. It shows that the ultimately attainable accuracy of the smoothed iterates, measured in the norm of the corresponding residuals, is, in general, not higher than ...
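    As an illustration of the kind of simple recurrence the abstract refers to, the sketch below applies the standard minimal residual smoothing update, in which eta is chosen to minimize the norm of the new smoothed residual; the function name and variable names are ours, and this is not the authors' analyzed code.

```python
import numpy as np

def smooth(xs, rs):
    """Minimal residual smoothing of primary iterates xs[n] and residuals rs[n]."""
    y, s = xs[0].copy(), rs[0].copy()
    smoothed_x, smoothed_r = [y.copy()], [s.copy()]
    for x, r in zip(xs[1:], rs[1:]):
        d = r - s                                    # direction from old smoothed residual to r_n
        eta = 0.0 if d @ d == 0 else -(s @ d) / (d @ d)
        y = y + eta * (x - y)                        # smoothed iterate y_n
        s = s + eta * d                              # smoothed residual s_n = b - A y_n
        smoothed_x.append(y.copy())
        smoothed_r.append(s.copy())
    return smoothed_x, smoothed_r
```

    In exact arithmetic this choice of eta guarantees that the norm of sn never exceeds that of sn−1 or rn; the paper's roundoff error analysis asks whether this apparent gain survives in finite precision.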

    How to Make Simpler GMRES and GCR More Stable

    In this paper we analyze the numerical behavior of several minimum residual methods which are mathematically equivalent to the GMRES method. Two main approaches are compared: one that computes the approximate solution in terms of a Krylov space basis from an upper triangular linear system for the coordinates, and one where the approximate solutions are updated with a simple recursion formula. We show that a different choice of the basis can significantly influence the numerical behavior of the resulting implementation. While Simpler GMRES and ORTHODIR are less stable due to the ill-conditioning of the basis used, the residual basis is well-conditioned as long as we have a reasonable residual norm decrease. These results lead to a new implementation, which is conditionally backward stable, and they explain the experimentally observed fact that the GCR method delivers very accurate approximate solutions when it converges fast enough without stagnation.
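    The "simple recursion formula" approach can be illustrated by a compact GCR sketch, in which the iterate and residual are updated directly from orthogonalized direction vectors with no triangular solve. This is an illustrative restatement under our own naming, not the implementation analyzed in the paper.

```python
import numpy as np

def gcr(A, b, k):
    """k steps of GCR: iterate and residual updated by a simple recursion (sketch)."""
    x = np.zeros(b.size)
    r = b.copy()
    P, AP = [], []                          # search directions and their images under A
    for _ in range(k):
        p, Ap = r.copy(), A @ r
        for pj, Apj in zip(P, AP):          # orthogonalize A p against previous A p_j
            beta = (Ap @ Apj) / (Apj @ Apj)
            p -= beta * pj
            Ap -= beta * Apj
        alpha = (r @ Ap) / (Ap @ Ap)
        x += alpha * p                      # recursive update of the iterate
        r -= alpha * Ap                     # recursive update of the residual
        P.append(p)
        AP.append(Ap)
    return x, r

rng = np.random.default_rng(1)
A = np.eye(40) + 0.2 * rng.standard_normal((40, 40))
b = rng.standard_normal(40)
x, r = gcr(A, b, 30)
# compare the true residual with the recursively updated one; the gap between
# them is the kind of quantity the paper's stability analysis is concerned with
print(np.linalg.norm(b - A @ x), np.linalg.norm(r))
```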